Paradigm Relative Entropy and Discriminative Learning
Abstract
The interactive roles of intra-paradigmatic and inter-paradigmatic distributions have been investigated to account for differential effects on visual lexical recognition for both inflected (Milin et al., 2009a, 2009b) and derived words (see Kuperman et al., 2010; Bertram et al., 2005; Schreuder et al., 2003, among others). In particular, Milin and colleagues focus on the divergence between the distribution of inflectional endings within a single paradigm (measured as the entropy of the distribution of paradigmatically related forms, or Paradigm Entropy) and the distribution of the same endings within their broader inflectional class (measured as the entropy of the distribution of inflectional endings across all paradigms, or Inflectional Entropy). They conclude that both entropic scores facilitate visual lexical recognition, but that when the two distributions differ, a conflict arises, resulting in slower word recognition. Similar results are reported by Kuperman and colleagues (2010) for reading times on Dutch derived words and are interpreted as reflecting an information imbalance between the family of the base word (e.g., plaats in plaatsing) and the family of the suffix (-ing).
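The divergence at issue is the standard relative entropy (Kullback-Leibler divergence) between a paradigm's own distribution of endings and the distribution of the same endings across its inflectional class. The Python sketch below computes Paradigm Entropy, Inflectional Entropy, and their relative entropy for a toy paradigm; the four endings and all frequency counts are invented for illustration and are not Milin et al.'s data.

```python
import math

def entropy(counts):
    """Shannon entropy (in bits) of a frequency distribution."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def relative_entropy(p_counts, q_counts):
    """Kullback-Leibler divergence D(P || Q) in bits, where P is the
    distribution of endings within one paradigm and Q the distribution
    of the same endings across the whole inflectional class."""
    p_total, q_total = sum(p_counts), sum(q_counts)
    return sum((p / p_total) * math.log2((p / p_total) / (q / q_total))
               for p, q in zip(p_counts, q_counts) if p > 0)

# Hypothetical counts for four endings of one paradigm, versus the
# same four endings pooled over its entire inflectional class.
paradigm_counts = [120, 40, 25, 15]
class_counts = [5000, 4200, 900, 400]

print(f"Paradigm Entropy:     {entropy(paradigm_counts):.3f} bits")
print(f"Inflectional Entropy: {entropy(class_counts):.3f} bits")
print(f"Relative Entropy:     {relative_entropy(paradigm_counts, class_counts):.3f} bits")
```

A high relative entropy here would mean the paradigm uses its endings in proportions that depart sharply from the class-wide norm, the kind of mismatch the abstract links to slower recognition.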
Similar Resources
Minimum Conditional Entropy Clustering: A Discriminative Framework for Clustering
In this paper, we introduce an assumption which makes it possible to extend the learning ability of discriminative models to the unsupervised setting. We propose an information-theoretic framework as an implementation of the low-density separation assumption. The proposed framework provides a unified perspective on Maximum Margin Clustering (MMC), Discriminative k-means, Spectral Clustering and Unsu...
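As a rough sketch of the quantity such a framework minimizes (not the authors' actual implementation): the empirical conditional entropy H(C|X) of cluster assignments given the data, which is low exactly when every point is confidently assigned to a single cluster, i.e. when decision boundaries fall in low-density regions.

```python
import numpy as np

def conditional_entropy(posteriors):
    """Empirical conditional entropy H(C|X) of cluster labels given the
    data, from an (n_samples, n_clusters) array of soft assignments
    p(c|x_i). It is minimal when every point is confidently assigned
    to one cluster, i.e. when boundaries cross low-density regions."""
    p = np.clip(posteriors, 1e-12, 1.0)
    return float(-np.mean(np.sum(p * np.log(p), axis=1)))

# Made-up posteriors for four points and two clusters.
confident = np.array([[0.99, 0.01], [0.98, 0.02], [0.03, 0.97], [0.01, 0.99]])
uncertain = np.array([[0.60, 0.40], [0.50, 0.50], [0.45, 0.55], [0.55, 0.45]])
print(conditional_entropy(confident))   # close to 0
print(conditional_entropy(uncertain))   # close to log(2) ~ 0.693
```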
Efficient Approximation of the Conditional Relative Entropy with Applications to Discriminative Learning of Bayesian Network Classifiers
We propose a minimum-variance unbiased approximation to the conditional relative entropy of the distribution induced by the observed frequency estimates, for multi-class classification tasks. This approximation extends a decomposable scoring criterion, named approximate conditional log-likelihood (aCLL), primarily used for discriminative learning of augmented Bayesian network classifiers. ...
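The aCLL criterion itself is derived in the paper; as context, the sketch below shows the discriminative score it is designed to approximate, the conditional log-likelihood CLL = Σᵢ log P(cᵢ | xᵢ), computed here for a naive Bayes classifier whose parameters are observed frequency estimates. The classifier, the smoothing scheme, and the toy data are illustrative assumptions, not the paper's construction.

```python
import math
from collections import Counter, defaultdict

def fit_naive_bayes(data):
    """Fit a naive Bayes model from observed frequency estimates.
    data: list of (feature_tuple, class_label) pairs, binary features."""
    n = len(data)
    class_counts = Counter(c for _, c in data)
    feat_counts = defaultdict(Counter)  # (class, position) -> value counts
    for x, c in data:
        for j, v in enumerate(x):
            feat_counts[(c, j)][v] += 1

    def predict_log_proba(x):
        # Unnormalized log P(c, x), then normalized over classes.
        scores = {}
        for c, nc in class_counts.items():
            s = math.log(nc / n)
            for j, v in enumerate(x):
                # Laplace smoothing; assumes binary feature values.
                s += math.log((feat_counts[(c, j)][v] + 1) / (nc + 2))
            scores[c] = s
        log_z = math.log(sum(math.exp(s) for s in scores.values()))
        return {c: s - log_z for c, s in scores.items()}

    return predict_log_proba

def conditional_log_likelihood(data, predict_log_proba):
    """CLL = sum_i log P(c_i | x_i): the discriminative score that
    aCLL approximates in decomposable, closed form."""
    return sum(predict_log_proba(x)[c] for x, c in data)

data = [((0, 1), "a"), ((0, 0), "a"), ((1, 1), "b"), ((1, 0), "b")]
model = fit_naive_bayes(data)
print(conditional_log_likelihood(data, model))
```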
The Minimum Information Principle for Discriminative Learning
Exponential models of distributions are widely used in machine learning for classification and modelling. It is well known that they can be interpreted as maximum entropy models under empirical expectation constraints. In this work, we argue that for classification tasks, mutual information is a more suitable information-theoretic measure to optimize. We show how the principle of minimum mu...
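The quantity at stake, the mutual information I(X;Y) between inputs and labels, can be computed directly from a joint distribution table; the sketch below does so for a hypothetical 2x2 co-occurrence table. On the abstract's account, the minimum mutual information idea is to prefer, among joint models meeting the empirical expectation constraints, the one with the smallest such value rather than the maximum-entropy one.

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits from a (possibly unnormalized) joint table p(x, y)."""
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)  # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)  # marginal p(y)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint / (px * py))
    return float(np.nansum(terms))  # 0 * log 0 treated as 0

# Hypothetical co-occurrence counts of a binary feature and a binary label.
counts = np.array([[30.0, 5.0], [10.0, 55.0]])
print(mutual_information(counts))
```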
Discriminative Learning of Syntactic and Semantic Dependencies
A system based on a Maximum Entropy model for discriminative learning of syntactic and semantic dependencies, submitted to the CoNLL-2008 shared task (Surdeanu et al., 2008), is presented in this paper. The system converts the dependency-learning task into classification problems and reconstructs the dependency relations from the classification results. Finally, F1 scores of 86.69, 69.95 and 78.35 are obta...
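A minimal sketch of the recipe the abstract describes, under assumptions of my own: each candidate head-dependent pair becomes one classification instance, and a maximum entropy classifier (multinomial logistic regression, with scikit-learn standing in for the paper's own model) predicts its relation label. The features and labels are invented for illustration.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Each instance: features of a candidate (head, dependent) pair,
# paired with a hypothetical dependency relation label.
train = [
    ({"head_pos": "VB", "dep_pos": "NN", "dist": 1}, "OBJ"),
    ({"head_pos": "VB", "dep_pos": "NN", "dist": -1}, "SBJ"),
    ({"head_pos": "NN", "dep_pos": "JJ", "dist": -1}, "NMOD"),
    ({"head_pos": "VB", "dep_pos": "RB", "dist": 2}, "ADV"),
]
vec = DictVectorizer()
X = vec.fit_transform([features for features, _ in train])
y = [label for _, label in train]

# Multinomial logistic regression is the standard maximum entropy
# classifier under feature-expectation constraints.
clf = LogisticRegression(max_iter=1000)
clf.fit(X, y)

test = vec.transform([{"head_pos": "VB", "dep_pos": "NN", "dist": 1}])
print(clf.predict(test))  # predicted dependency relation label
```

Reconstructing the full dependency structure from such per-pair decisions (the step the abstract mentions) would sit on top of this classifier and is not shown here.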
An Analogical Paradox for Nonhuman Primates: Bridging the Perceptual-Conceptual Gap
We investigated the role that entropy measures, discriminative cues, and symbolic knowledge play for rhesus monkeys in the acquisition of the concepts of same and different for use in a computerized relational matching-to-sample (RMTS) task. After repeatedly failing to perceive relations between pairs of stimuli in a two-choice discrimination paradigm, monkeys rapidly learned to discriminate be...